Second Order Stochastic Optimization in Linear Time

Authors

  • Naman Agarwal
  • Brian Bullins
  • Elad Hazan
Abstract

First-order stochastic methods are the state of the art in large-scale machine learning optimization owing to their efficient per-iteration complexity. Second-order methods, while able to provide faster convergence, have been much less explored because of the high cost of computing the second-order information. In this paper we develop second-order stochastic methods for optimization problems in machine learning that match the per-iteration cost of gradient-based methods and, in certain settings, improve the overall running time over the state of the art. Furthermore, our algorithm has the desirable property of being implementable in time linear in the sparsity of the input data.
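
The paper's algorithm is not reproduced on this page, but the following minimal sketch illustrates the kind of step the abstract describes: a stochastic Newton-type update whose per-iteration cost is dominated by Hessian-vector products of individually sampled losses, each computable in time linear in the sparsity of the sampled example. All names and parameters here (newton_step_estimate, hvp_sample, depth, scale) are illustrative assumptions, not the paper's interface.

```python
import numpy as np

def newton_step_estimate(grad, hvp_sample, x, depth=150, scale=0.03):
    """Sketch of a linear-time stochastic Newton-type step (illustrative, not the paper's exact algorithm).

    Approximates H(x)^{-1} @ grad with a truncated Neumann series,
        H^{-1} v = scale * sum_{i >= 0} (I - scale*H)^i v,
    evaluated recursively with one stochastic Hessian-vector product per term:
        p_j = v + (I - scale*H_j) p_{j-1},  where H_j is the Hessian of one sampled loss.

    Arguments (all names/parameters are assumptions for this sketch):
      grad        -- gradient at x, shape (d,)
      hvp_sample  -- hvp_sample(x, v): Hessian-vector product of one randomly sampled loss
      scale       -- chosen so that scale * Hessian has eigenvalues in (0, 1]
    """
    p = grad.copy()
    for _ in range(depth):
        # p <- grad + (I - scale * H_sample) @ p ; one Hessian-vector product per step
        p = grad + p - scale * hvp_sample(x, p)
    return scale * p  # approximate Newton direction H(x)^{-1} @ grad


# Toy usage: ridge-regularized least squares (synthetic data, for illustration only).
rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((100, 5)), rng.standard_normal(100), 0.1

def full_grad(x):
    return A.T @ (A @ x - b) / len(b) + lam * x

def hvp_sample(x, v):
    i = rng.integers(len(b))              # Hessian of one sampled loss: a_i a_i^T + lam*I
    return A[i] * (A[i] @ v) + lam * v    # costs time linear in the sparsity of a_i

x = np.zeros(5)
for _ in range(20):
    x = x - newton_step_estimate(full_grad(x), hvp_sample, x)
```

Because the inverse-Hessian action is built entirely from Hessian-vector products of sampled losses, no d-by-d matrix is ever formed, which is what keeps the per-iteration cost comparable to that of a gradient step.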

Similar resources

Numerical solution of second-order stochastic differential equations with Gaussian random parameters

In this paper, we present the numerical solution of stochastic ordinary differential equations (SDEs) of various orders, especially second order, with time-varying and Gaussian random coefficients. We give a complete analysis for second-order equations in the special case of scalar linear second-order equations (damped harmonic oscillators with additive or multiplicative noise). Making stochastic differe...

Second-Order Stochastic Optimization for Machine Learning in Linear Time

First-order stochastic methods are the state-of-the-art in large-scale machine learning optimization owing to efficient per-iteration complexity. Second-order methods, while able to provide faster convergence, have been much less explored due to the high cost of computing the second-order information. In this paper we develop second-order stochastic methods for optimization problems in machine ...

Solving single facility goal Weber location problem using stochastic optimization methods

Location theory is one of the most important topics in optimization and operations research. In location problems, the goal is to place one or more facilities so that criteria such as transportation costs, customer traveling distance, total service time, and cost of servicing are optimized. In this paper, we investigate the goal Weber location problem in which the...

Stochastic Non-convex Optimization with Strong High Probability Second-order Convergence

In this paper, we study stochastic non-convex optimization with non-convex random functions. Recent studies on non-convex optimization revolve around establishing second-order convergence, i.e., converging to nearly second-order optimal stationary points. However, existing results on stochastic non-convex optimization are limited, especially with high-probability second-order convergence. W...

Stochastic Bound Majorization

Recently a majorization method for optimizing partition functions of log-linear models was proposed alongside a novel quadratic variational upper bound. In the batch setting, it outperformed state-of-the-art first- and second-order optimization methods on various learning tasks. We propose a stochastic version of this bound majorization method as well as a low-rank modification for high-dimensiona...

Journal:
  • CoRR

Volume: abs/1602.03943  Issue: –

Pages: –

Publication date: 2016